The Download: conspiracy-debunking chatbots, and fact-checking AI
The internet has made it easier than ever to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and even leading to unnecessary deaths. Now, researchers believe they've uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people's belief in it by about 20%, even among participants who said their beliefs were important to their identity. The findings could represent an important step forward in how we engage with and educate people who espouse baseless theories.

Google's new tool lets large language models fact-check their responses

The news: Google is releasing a tool called DataGemma that it hopes will help reduce problems caused by AI "hallucinating," or making incorrect claims.
The first of the two methods is called Retrieval-Interleaved Generation (RIG), which acts as a sort of fact-checker. If a user prompts the model with a question, such as "Has the use of renewable energy sources increased in the world?", the model will come up with a "first draft" answer. Then RIG identifies which portions of the draft answer could be checked against Google's Data Commons, a massive repository of data and statistics from reliable sources like the United Nations or the Centers for Disease Control and Prevention. Next, it runs those checks and replaces any incorrect original guesses with correct facts. It also cites its sources to the user.
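
To make the flow concrete, here is a minimal Python sketch of the four RIG steps: draft an answer, identify checkable claims, substitute verified figures, and cite sources. Everything in it is an assumption for illustration: the inline "[DC(query) -> guess]" tagging format, the stub functions llm_generate_draft and data_commons_lookup, and all numbers are invented stand-ins, not DataGemma's actual code, API, or data.

    import re

    def llm_generate_draft(question: str) -> str:
        # Stand-in for the model's "first draft". In this hypothetical
        # format, each checkable statistic is tagged with the Data
        # Commons query that could verify it, plus the model's guess.
        return ("Yes. Renewables supplied about "
                "[DC(world renewable electricity share 2022) -> 25%] "
                "of global electricity in 2022.")

    def data_commons_lookup(query: str) -> tuple[str, str]:
        # Stand-in for querying Google's Data Commons; a real lookup
        # would return a figure traceable to a source such as the UN
        # or the CDC. The value here is a placeholder, not a real stat.
        return "30%", "Data Commons (UN energy statistics)"

    # Matches the hypothetical inline tag: [DC(<query>) -> <guess>]
    CHECKABLE = re.compile(r"\[DC\((?P<query>[^)]+)\)\s*->\s*(?P<guess>[^\]]+)\]")

    def rig_answer(question: str) -> str:
        # Step 1: draft an answer.
        draft = llm_generate_draft(question)
        citations: list[str] = []

        def substitute(match: re.Match) -> str:
            # Steps 2-3: check each tagged claim against Data Commons
            # and replace the model's guess with the verified figure.
            verified, source = data_commons_lookup(match.group("query"))
            citations.append(source)
            return verified

        answer = CHECKABLE.sub(substitute, draft)
        # Step 4: cite the sources of the substituted figures.
        if citations:
            answer += "\n\nSources: " + "; ".join(dict.fromkeys(citations))
        return answer

    print(rig_answer("Has the use of renewable energy sources increased in the world?"))

The inline-tag design in this sketch is one way to read the "interleaved" part of the name: verification happens inside the generated text itself, claim by claim, rather than as a separate pass over the finished answer.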